15 research outputs found

    Social learning against data falsification in sensor networks

    Get PDF
    Sensor networks generate large amounts of geographically distributed data. The conventional approach to exploiting this data is to first gather it in a special node that then performs processing and inference. However, what happens if this node is destroyed, or even worse, if it is hijacked? To explore this problem, in this work we consider a smart attacker who can take control of critical nodes within the network and use them to inject false information. To counter this critical security threat, we propose a novel scheme that enables data aggregation and decision-making over networks based on social learning, where the sensor nodes act as agents making decisions in a social network. Our results suggest that social learning enables high network resilience, even when a significant portion of the nodes has been compromised by the attacker.
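    The idea of nodes deciding like agents in a social network can be illustrated with a toy simulation. This is a minimal sketch under assumptions of our own (a binary world state, a fixed signal accuracy, compromised nodes that always report the opposite state, and a simple majority-following decision rule); it is not the paper's actual scheme.

```python
import random

random.seed(0)

def run_network(n_nodes=100, frac_compromised=0.3, p_correct=0.7, theta=1):
    """Toy social-learning cascade with Byzantine nodes (illustrative only).

    Each honest node observes a private binary signal equal to the true
    state `theta` with probability `p_correct`, then decides sequentially:
    it follows the running majority of earlier public decisions only when
    that majority is strong enough to outweigh one private signal.
    Compromised nodes always inject the opposite of the true state.
    """
    compromised = set(random.sample(range(n_nodes),
                                    int(frac_compromised * n_nodes)))
    decisions = []
    for i in range(n_nodes):
        if i in compromised:
            decisions.append(1 - theta)  # inject false information
            continue
        signal = theta if random.random() < p_correct else 1 - theta
        votes_for_1 = sum(decisions)
        votes_for_0 = len(decisions) - votes_for_1
        if abs(votes_for_1 - votes_for_0) > 1:
            decisions.append(1 if votes_for_1 > votes_for_0 else 0)
        else:
            decisions.append(signal)  # private signal breaks weak majorities
    honest = [d for i, d in enumerate(decisions) if i not in compromised]
    return sum(1 for d in honest if d == theta) / len(honest)

accuracy = run_network()
print(f"honest-node accuracy with 30% compromised nodes: {accuracy:.2f}")
```

    Even this crude rule shows the trade-off the abstract points at: pooling neighbours' decisions can wash out injected falsehoods, but a strong early cascade can also propagate them.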

    Crowdsourcing Controls: A Review and Research Agenda for Crowdsourcing Controls Used for Macro-tasks

    Full text link
    Crowdsourcing—the employment of ad hoc online labor to perform various tasks—has become a popular outsourcing vehicle. Our current approach to crowdsourcing—focusing on micro-tasks—fails to leverage the potential of crowds to tackle more complex problems. Leveraging crowds to tackle more complex macro-tasks requires a better comprehension of crowdsourcing controls. Crowdsourcing controls are mechanisms used to align crowd workers’ actions with predefined standards to achieve a set of goals and objectives. Unfortunately, we know very little about the topic of crowdsourcing controls directed at accomplishing complex macro-tasks. To address issues associated with crowdsourcing controls for macro-tasks, this chapter has several objectives. First, it presents and discusses the literature on control theory. Second, this chapter presents a scoping literature review of crowdsourcing controls. Finally, the chapter identifies gaps and puts forth a research agenda to address these shortcomings. The research agenda focuses on understanding how to employ the controls needed to perform macro-tasking in crowds and the implications for crowdsourcing system designers.
    National Science Foundation grant CHS-1617820
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/150493/1/Robert 2019 Preprint Chapter 3.pdf (preprint version)

    Validation of genotype cluster investigations for Mycobacterium tuberculosis: application results for 44 clusters from four heterogeneous United States jurisdictions

    No full text
    Abstract Background Tracking the dissemination of specific Mycobacterium tuberculosis (Mtb) strains using genotyped Mtb isolates from tuberculosis patients is a routine public health practice in the United States. The present study proposes a standardized cluster investigation method to identify epidemiologically linked patients in Mtb genotype clusters. The study also attempts to determine the proportion of epidemiologically linked patients the proposed method would identify beyond the outcome of the conventional contact investigation. Methods The study population included Mtb culture-positive patients from Georgia, Maryland, Massachusetts and Houston, Texas. Mtb isolates were genotyped by CDC’s National TB Genotyping Service (NTGS) from January 2006 to October 2010. Mtb cluster investigations (CLIs) were conducted for patients whose isolates matched exactly by spoligotyping and 12-locus MIRU-VNTR. CLIs were carried out in four sequential steps: (1) Public Health Worker (PHW) Interview, (2) Contact Investigation (CI) Evaluation, (3) Public Health Records Review, and (4) CLI TB Patient Interviews. Comparison between patients whose links were identified through the study’s CLI interviews (Step 4) and patients whose links were identified earlier in CLI (Steps 1–3) was conducted using logistic regression. Results Forty-four clusters were randomly selected from the four study sites (401 patients in total). Epidemiologic links were identified for 189/401 (47 %) study patients in a total of 201 linked patient-pairs. The numbers of linked patients identified in each CLI step were: Step 1 - 105/401 (26.2 %), Step 2 - 15/388 (3.9 %), Step 3 - 41/281 (14.6 %), and Step 4 - 28/119 (30 %). Among the 189 linked patients, 28 (14.8 %) were not identified in previous CI. No epidemiologic links were identified in 13/44 (30 %) clusters.
Conclusions We validated a standardized and practical method to systematically identify epidemiologic links among patients in Mtb genotype clusters, which can be integrated into TB control and prevention programs in public health settings. The CLI interview identified additional epidemiologic links that were not identified in previous CI. One-third of the clusters showed no epidemiologic links despite being extensively investigated, suggesting that some improvement in the interviewing methods is still needed.
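    The clustering criterion in the methods section (isolates matching exactly on both spoligotype and 12-locus MIRU-VNTR) amounts to an exact grouping key. A minimal sketch of that grouping step, with entirely hypothetical patient records and genotype strings:

```python
from collections import defaultdict

# Toy records: (patient_id, spoligotype, 12-locus MIRU-VNTR).
# All identifiers and genotype strings below are invented for illustration.
patients = [
    ("P1", "777777777760771", "223325153324"),
    ("P2", "777777777760771", "223325153324"),
    ("P3", "000000000003771", "224325153322"),
    ("P4", "777777777760771", "223325153324"),
    ("P5", "000000000003771", "224325153321"),
]

def genotype_clusters(records):
    """Group patients whose isolates match exactly on both spoligotype
    and 12-locus MIRU-VNTR; a cluster needs at least two patients."""
    groups = defaultdict(list)
    for pid, spoligo, miru in records:
        groups[(spoligo, miru)].append(pid)
    return {k: v for k, v in groups.items() if len(v) >= 2}

clusters = genotype_clusters(patients)
for genotype, members in clusters.items():
    print(genotype, members)  # P1, P2, P4 share an identical genotype
```

    The epidemiologic-link investigation (Steps 1–4) then works within each such cluster; the code above only reproduces the cluster-definition step.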

    Single-player monte-carlo tree search

    No full text
    Abstract. Classical methods such as A* and IDA* are a popular and successful choice for one-player games. However, they fail without an accurate admissible evaluation function. In this paper we investigate whether Monte-Carlo Tree Search (MCTS) is an interesting alternative for one-player games where A* and IDA* methods do not perform well. Therefore, we propose a new MCTS variant, called Single-Player Monte-Carlo Tree Search (SP-MCTS). The selection and backpropagation strategy in SP-MCTS are different from standard MCTS. Moreover, SP-MCTS makes use of a straightforward Meta-Search extension. We tested the method on the puzzle SameGame. It turned out that our SP-MCTS program gained the highest score so far on the standardized test set.
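    The modified selection strategy the abstract mentions adds a possible-deviation term to the standard UCT value, which makes sense in one-player games where scores are not bounded by an adversary. A sketch of that selection value (the constants c and d are tuning parameters; the values used here are illustrative, not the paper's tuned settings):

```python
import math

def sp_uct(child_scores, parent_visits, c=0.5, d=10_000):
    """SP-MCTS selection value for one child: average score, plus the
    usual UCT exploration term, plus a deviation term that favours
    children whose simulation results vary widely."""
    n = len(child_scores)
    mean = sum(child_scores) / n
    exploration = c * math.sqrt(math.log(parent_visits) / n)
    # sum of squared results minus n * mean^2 estimates the spread;
    # the constant d keeps the term large for rarely-visited nodes
    deviation = math.sqrt(
        (sum(s * s for s in child_scores) - n * mean * mean + d) / n)
    return mean + exploration + deviation

# Selection picks the child maximising the SP-MCTS value.
children = {"a": [120, 130, 125], "b": [40, 300, 10]}
best = max(children, key=lambda k: sp_uct(children[k], parent_visits=6))
print(best)  # the high-variance child "b" wins here
```

    In a two-player setting the deviation term would be risky (an opponent punishes variance), but in a puzzle like SameGame a high-variance line may hide the single best score, which is exactly what a one-player search wants to find.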

    Deriving Filtering Algorithms from Constraint Checkers

    No full text
    This report deals with global constraints for which the set of solutions can be recognized by an extended finite automaton whose size is bounded by a polynomial in n, where n is the number of variables of the corresponding global constraint. By reformulating the automaton as a conjunction of signature and transition constraints we show how to systematically obtain a filtering algorithm. Under some restrictions on the signature and transition constraints this filtering algorithm achieves arc-consistency. An implementation based on some constraints as well as on the metaprogramming facilities of SICStus Prolog is available. For a restricted class of automata we provide a filtering algorithm for the relaxed case, where the violation cost is the minimum number of variables to unassign in order to get back to a solution.
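    The automaton view can be illustrated on a small global constraint. The sketch below checks a contiguity-style constraint (the 1s in a 0/1 sequence must form one contiguous block) by running a three-state automaton; the constraint choice, state names, and encoding are our illustration, not the report's implementation, and the code only *checks* the constraint rather than filtering domains:

```python
# States: 's' = before the block of 1s, 'i' = inside it, 't' = after it.
# All three states accept; a missing transition means rejection.
TRANSITIONS = {
    ("s", 0): "s", ("s", 1): "i",
    ("i", 0): "t", ("i", 1): "i",
    ("t", 0): "t",  # no ("t", 1): a second block of 1s is rejected
}
ACCEPTING = {"s", "i", "t"}

def contiguity(signature):
    """Run the automaton over the signature values. The chain of
    transitions Q_0 = 's', (Q_i, X_i) -> Q_{i+1} is what the report
    reformulates as a conjunction of transition constraints, on which
    a constraint solver can then propagate to obtain filtering."""
    state = "s"
    for symbol in signature:
        if (state, symbol) not in TRANSITIONS:
            return False
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(contiguity([0, 1, 1, 1, 0]))  # True: one contiguous block of 1s
print(contiguity([1, 0, 1]))        # False: two separate blocks
```

    Turning each transition lookup into a constraint over the state variables Q_i is what lets the solver prune variable domains instead of merely accepting or rejecting a complete assignment.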